30 research outputs found

    Real-Time Recommendation of Streamed Data

    This tutorial addressed two trending topics in the field of recommender systems research, namely A/B testing and real-time recommendation of streamed data. Focusing on the news domain, participants learned how to benchmark the performance of stream-based recommendation algorithms in a live recommender system and in a simulated environment.
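
    A/B testing in a live recommender typically relies on a deterministic assignment of users to experiment groups, so that the same user always sees the same algorithm variant. Below is a minimal sketch of such a hash-based split; the group labels, salt, and split ratio are illustrative assumptions, not part of the tutorial material.

    import hashlib

    def assign_group(user_id, groups=("A", "B"), salt="experiment-1"):
        """Deterministically assign a user to an experiment group via hashing."""
        digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
        return groups[int(digest, 16) % len(groups)]

    # The same user always lands in the same bucket, so observed metrics stay comparable.
    print(assign_group("user-42"))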

    Users' reading habits in online news portals

    The aim of this study is to survey the reading habits of users of an online news portal. The assumption motivating this study is that insight into users' reading habits can help design better news recommendation systems. We estimated the transition probabilities that users who read an article of one news category will move on to read an article of another (not necessarily distinct) news category. For this, we analyzed users' click behavior within the plista data set. Key findings are the popularity of the local category, readers' loyalty to the same category, similar results when considering enforced click streams, and the observation that click behavior is strongly influenced by the news category.
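
    The category-to-category transition probabilities described above can be estimated directly from a click log by counting consecutive reads per user and normalizing each row. The following is a minimal sketch, not the authors' original code; the log structure and category labels are illustrative assumptions.

    from collections import defaultdict

    def estimate_transition_matrix(click_log):
        """Estimate P(next category | current category) from per-user click sequences.

        click_log: dict mapping user_id -> list of news categories in reading order
                   (illustrative structure; the plista logs use a different format).
        """
        counts = defaultdict(lambda: defaultdict(int))
        for _, categories in click_log.items():
            for current, nxt in zip(categories, categories[1:]):
                counts[current][nxt] += 1

        # Normalize each row so the outgoing probabilities sum to 1.
        transitions = {}
        for current, row in counts.items():
            total = sum(row.values())
            transitions[current] = {nxt: n / total for nxt, n in row.items()}
        return transitions

    # Toy example: a reader loyal to the "local" category.
    log = {"u1": ["local", "local", "sports", "local"],
           "u2": ["local", "politics", "local"]}
    print(estimate_transition_matrix(log))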

    Overview of CLEF NEWSREEL 2014: News Recommendations Evaluation Labs

    This paper summarises the objectives, organisation, and results of the first news recommendation evaluation lab (NEWSREEL 2014). NEWSREEL targeted the evaluation of news recommendation algorithms in the form of a campaign-style evaluation lab. Participants could apply two types of evaluation schemes. On the one hand, they could apply their algorithms to a data set; we refer to this setting as offline evaluation. On the other hand, they could deploy their algorithms on a server to interactively receive recommendation requests; we refer to this setting as online evaluation. The latter setting is meant to reveal the actual performance of recommendation methods. The competition strove to illustrate the differences between evaluation with historical data and evaluation with actual users. The online evaluation reflects all requirements that active recommender systems face in practice, including real-time responses and large-scale data volumes. We present the competition’s results and discuss commonalities among the participants’ approaches.
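
    In the online setting, a participant's system has to answer each incoming recommendation request over HTTP while it is still useful to the requesting portal. The sketch below shows the general shape of such an endpoint with a trivial "most recently seen" model; the JSON field names are assumptions for illustration and differ from the actual Open Recommendation Platform message format.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Simplistic in-memory model: recommend the most recently seen items.
    recent_items = []

    class RecommendationHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            # Hypothetical fields; the real protocol is richer and named differently.
            item_id = body.get("item_id")
            limit = body.get("limit", 6)

            if item_id is not None and item_id not in recent_items:
                recent_items.append(item_id)

            # Recommend the newest items, excluding the one currently being read.
            recs = [i for i in reversed(recent_items) if i != item_id][:limit]

            payload = json.dumps({"recs": recs}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), RecommendationHandler).serve_forever()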

    Benchmarking News Recommendations in a Living Lab

    Most user-centric studies of information access systems in the literature suffer from unrealistic settings or a limited number of participating users. To address this issue, the idea of a living lab has been promoted. Living labs allow us to evaluate research hypotheses with a large number of users who satisfy their information needs in a real context. In this paper, we introduce a living lab on news recommendation in real time. The living lab was first organized as the News Recommendation Challenge at ACM RecSys’13 and then as the campaign-style evaluation lab NEWSREEL at CLEF’14. Within this lab, researchers were asked to provide news article recommendations to millions of users in real time. Unlike participants in a laboratory user study, these users follow their own agenda, so laboratory bias on their behavior can be neglected. We outline the living lab scenario and the experimental setup of the two benchmarking events. We argue that this living lab can serve as a reference point for the implementation of living labs for the evaluation of information access systems.

    The plista dataset

    Releasing datasets has fostered research in fields such as information retrieval and recommender systems. Datasets are typically tailored to specific scenarios. In this work, we present the plista dataset. The dataset contains a collection of news articles published on 13 news portals. Additionally, the dataset comprises user interactions with those articles. We introduce the dataset’s main characteristics and illustrate possible applications of the dataset.
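
    A typical first step when working with such an interaction log is to stream over the event records and separate interaction types per portal. The sketch below assumes a newline-delimited JSON log with hypothetical field names ("type", "portal_id"); the actual plista schema differs and is documented with the dataset.

    import json
    from collections import Counter

    def summarize_events(path):
        """Count interaction events per news portal and clicks overall."""
        events_per_portal = Counter()
        clicks = 0
        with open(path, encoding="utf-8") as log:
            for line in log:
                record = json.loads(line)
                events_per_portal[record.get("portal_id")] += 1
                if record.get("type") == "click":
                    clicks += 1
        return events_per_portal, clicks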

    CLEF NewsREEL 2016: Comparing Multi-Dimensional Offline and Online Evaluation of News Recommender Systems

    Running in its third year at CLEF, NewsREEL challenged participants to develop news recommendation algorithms and have them benchmarked in an online (Task 1) and an offline setting (Task 2), respectively. This paper provides an overview of the NewsREEL scenario, outlines this year’s campaign, presents the results of both tasks, and discusses the approaches of the participating teams. Moreover, it reviews ideas on living lab evaluation that were presented as part of a “New Ideas” track at the conference in Portugal. The presented results illustrate the potential of multi-dimensional evaluation of recommendation algorithms in a living lab and in a simulation-based evaluation setting.
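
    Two of the dimensions typically reported in the online task are the click-through rate of delivered recommendations and the response time of the recommender. A minimal sketch of both computations over logged feedback follows; the inputs are assumptions for illustration, not NewsREEL's reporting format.

    def click_through_rate(impressions, clicks):
        """Fraction of delivered recommendations that received a click."""
        return clicks / impressions if impressions else 0.0

    def mean_response_time(latencies_ms):
        """Average time taken to answer recommendation requests, in milliseconds."""
        return sum(latencies_ms) / len(latencies_ms) if latencies_ms else 0.0

    # Example: 12 clicks out of 1,000 delivered recommendations.
    print(click_through_rate(1000, 12))      # 0.012
    print(mean_response_time([35, 80, 60]))  # ~58.3 ms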

    Benchmarking news recommendations: the CLEF NewsREEL use case

    The CLEF NewsREEL challenge is a campaign-style evaluation lab that allows participants to evaluate and optimize news recommender algorithms. The goal is to create an algorithm that recommends news items that users would click, while respecting a strict time constraint. The lab challenges participants to either compete in a "living lab" (Task 1) or perform an evaluation that replays recorded streams (Task 2). In this report, we discuss the objectives and challenges of the NewsREEL lab, summarize last year's campaign, and outline the main research challenges that can be addressed by participating in NewsREEL 2016.
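
    The strict time constraint mentioned above means a recommender must always return an answer, even when its main model is too slow for a particular request. Below is a minimal sketch of a time-budgeted call with a popularity fallback; the budget value and function names are illustrative assumptions, not the challenge's official rules.

    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    # Shared worker pool so a slow model call does not block the response path.
    _pool = ThreadPoolExecutor(max_workers=4)
    TIME_BUDGET_SECONDS = 0.1  # illustrative budget; check the current challenge rules

    def recommend_with_budget(model_recommend, fallback_items, request, limit=6):
        """Run the main recommender, falling back to popular items if it is too slow."""
        future = _pool.submit(model_recommend, request, limit)
        try:
            return future.result(timeout=TIME_BUDGET_SECONDS)
        except TimeoutError:
            # The model missed the deadline: answer with precomputed popular items.
            return fallback_items[:limit]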

    Dynamic generation of personalized hybrid recommender systems


    Information Retrieval and User-Centric Recommender System Evaluation

    Traditional recommender system evaluation focuses on raising accuracy or lowering the rating prediction error of the recommendation algorithm. Recently, however, discrepancies between commonly used metrics (e.g. precision, recall, root-mean-square error) and the quality experienced by users have been brought to light. This project aims to address these discrepancies by developing novel means of recommender system evaluation that encompass qualities identified through traditional evaluation metrics as well as user-centric factors, e.g. diversity, serendipity, and novelty, and by bringing further insight into the topic by analyzing and translating the problem of evaluation from an Information Retrieval perspective.
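
    The contrast drawn above between accuracy-oriented metrics and user-centric factors can be made concrete with two small measurements: precision@k for accuracy and intra-list diversity for the user-centric side. A minimal sketch follows; the item distance function is an assumption, and in practice it would come from item content or embeddings.

    from itertools import combinations

    def precision_at_k(recommended, relevant, k):
        """Fraction of the top-k recommended items that the user found relevant."""
        top_k = recommended[:k]
        return sum(1 for item in top_k if item in relevant) / k

    def intra_list_diversity(recommended, distance):
        """Average pairwise distance between recommended items (higher = more diverse)."""
        pairs = list(combinations(recommended, 2))
        if not pairs:
            return 0.0
        return sum(distance(a, b) for a, b in pairs) / len(pairs)

    # Toy example: items are category labels; distance is 1 if the categories differ.
    recs = ["local", "local", "sports", "politics"]
    print(precision_at_k(recs, relevant={"sports", "politics"}, k=4))        # 0.5
    print(intra_list_diversity(recs, lambda a, b: 0.0 if a == b else 1.0))   # ~0.83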